Optimization and Nonlinear Equations

Author

  • Gordon K. Smyth
Abstract

Gordon K. Smyth, May 1997

Optimization means to find the value of x which maximizes or minimizes a given function f(x). The idea of optimization goes to the heart of statistical methodology, as it is involved in solving statistical problems based on least squares, maximum likelihood, posterior mode and so on. A closely related problem is that of solving a nonlinear equation, g(x) = 0, for x, where g is a possibly multivariate function. Many algorithms for minimizing f(x) are in fact derived from algorithms for solving g = ∂f/∂x = 0, where ∂f/∂x is the vector of derivatives of f with respect to the components of x.

Except in linear cases, optimization and equation solving invariably proceed by iteration. Starting from an approximate trial solution, a useful algorithm will gradually refine the working estimate until a pre-determined level of precision has been reached. If the functions are smooth, a good algorithm can be expected to converge to a solution when given a sufficiently good starting value. A good starting value is one of the keys to success. In general, finding a starting value requires heuristics and an analysis of the problem. One strategy for fitting complex statistical models, by maximum likelihood or otherwise, is to progress from the simple to the complex in stages. Fit a series of models of increasing complexity, using the simpler model as a starting value for the more complicated model in each case. Maximum likelihood iterations can often be initialized by using a less efficient moment estimator. In some special cases, such as generalized linear models, it is possible to use the data itself as a starting value for the fitted values.

An extremum (maximum or minimum) of f can be either global (truly the extreme value of f over its range) or local (the extreme value of f in a neighborhood containing the value). (See Figure 1.) Generally it is the global extremum that we want.
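Since minimizing f amounts to solving g = ∂f/∂x = 0, the iterative refinement described above can be sketched with a one-dimensional Newton iteration. This is an illustrative sketch, not code from the article; the test function f(x) = exp(x) - 2x, the tolerance, and all names are assumptions chosen here for demonstration.

```python
import math

def newton_minimize(fprime, fsecond, x0, tol=1e-10, max_iter=50):
    """Minimize a smooth f by solving g(x) = f'(x) = 0 with Newton's method.

    Each step refines the working estimate: x <- x - f'(x)/f''(x),
    stopping once the correction falls below the requested precision."""
    x = x0
    for _ in range(max_iter):
        step = fprime(x) / fsecond(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge from this starting value")

# Hypothetical example: f(x) = exp(x) - 2x, so f'(x) = exp(x) - 2 and
# f''(x) = exp(x); the unique minimum is at x = log 2.
xmin = newton_minimize(lambda x: math.exp(x) - 2, math.exp, x0=0.0)
```

From the starting value x0 = 0 the iteration converges in a handful of steps, illustrating how a sufficiently good starting value lets a smooth problem converge quickly.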
(A maximum likelihood estimator, for example, is by definition the global maximum of the likelihood.) Unfortunately, distinguishing local extrema from the global extremum is not an easy task. One heuristic is to start the iteration from several widely varying starting points, and to take the most extreme (if they are not equal). If necessary a large number of starting values can be randomly generated. Another heuristic is to perturb a local extremum slightly to check that the algorithm returns to it.

[Figure 1: a function f(x) plotted against x, with extrema marked at x1 and x2.]
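The multiple-starting-points heuristic can be sketched as follows. This is a hypothetical illustration, not from the article: the test function f(x) = x^4 - 3x^2 + x (which has two local minima), the search interval (-3, 3), and the helper names are all assumptions made here.

```python
import random

def multistart_minimize(f, fprime, fsecond, n_starts=20, lo=-3.0, hi=3.0, seed=0):
    """Heuristic for the global minimum: run Newton's method on f'(x) = 0
    from many randomly generated starting values and keep the most extreme
    (smallest f) of the local minima found."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_starts):
        x = rng.uniform(lo, hi)
        try:
            for _ in range(100):              # Newton iteration on f'(x) = 0
                step = fprime(x) / fsecond(x)
                x -= step
                if abs(step) < 1e-10:
                    break
            else:
                continue                      # did not converge from this start
        except ZeroDivisionError:
            continue                          # f''(x) vanished; abandon this start
        if fsecond(x) > 0 and f(x) < best_f:  # keep minima only, not maxima
            best_x, best_f = x, f(x)
    return best_x

# f(x) = x^4 - 3x^2 + x has a local minimum near x = 1.13 and the
# global minimum near x = -1.30.
f   = lambda x: x**4 - 3*x**2 + x
fp  = lambda x: 4*x**3 - 6*x + 1
fpp = lambda x: 12*x**2 - 6
x_global = multistart_minimize(f, fp, fpp)
```

Each start converges to whichever extremum its basin contains; comparing the function values across starts picks out the global minimum, exactly as the heuristic in the text prescribes.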


Similar Articles

A Three-terms Conjugate Gradient Algorithm for Solving Large-Scale Systems of Nonlinear Equations

The nonlinear conjugate gradient method is well known for solving large-scale unconstrained optimization problems due to its low storage requirements and simplicity of implementation. Research activities on its application to higher-dimensional systems of nonlinear equations are just beginning. This paper presents a three-term conjugate gradient algorithm for solving large-scale systems of nonlinear e...

Optimization of solution stiff differential equations using MHAM and RSK methods

In this paper, a nonlinear stiff differential equation is solved using the Rosenbrock iterative method, the modified homotopy analysis method and the power series method. The approximate solution of this equation is calculated in the form of a series whose components are computed by applying recursive relations. Some numerical examples are studied to demonstrate the accuracy of the presented meth...

Solving a non-convex non-linear optimization problem constrained by fuzzy relational equations and Sugeno-Weber family of t-norms

The Sugeno-Weber family of t-norms and t-conorms is one of the most widely applied in various fuzzy modelling problems. This family of t-norms and t-conorms was suggested by Weber for modeling the intersection and union of fuzzy sets. Also, the t-conorms were suggested as addition rules by Sugeno for so-called λ-fuzzy measures. In this paper, we study a nonlinear optimization problem where the fea...

Optimization of Bistability in Nonlinear Chalcogenide Fiber Bragg Grating for All Optical Switch and Memory Applications

We solve the coupled mode equations governing chalcogenide nonlinear fiber Bragg gratings (FBGs) numerically and obtain the bistability characteristics. The characteristics of the chalcogenide nonlinear FBGs, such as the switching threshold intensity, bistability interval and on-off switching ratio, are studied. The effects of FBG length and its third-order nonlinear refractive index on FBG cha...

On Efficiency of Non-Monotone Adaptive Trust Region and Scaled Trust Region Methods in Solving Nonlinear Systems of Equations

In this paper we run two important methods on some well-known problems and compare their performance and efficiency in solving nonlinear systems of equations. One of these methods is a non-monotone adaptive trust region strategy and the other is a scaled trust region approach. Each of the methods showed fast convergence on some problems and slow convergence on other o...

On the optimization of Dombi non-linear programming

The Dombi family of t-norms is a parametric family of continuous strict t-norms, whose members are increasing functions of the parameter. This family covers the whole spectrum of t-norms as the parameter is varied from zero to infinity. In this paper, we study a nonlinear optimization problem in which the constraints are defined as fuzzy relational equations (FRE) with the Dombi...


Journal:

Volume   Issue 

Pages  -

Publication date 2002